Learning End-to-End Goal-Oriented Dialog with Multiple Answers
In a dialog, there can be multiple valid next utterances at any point. Current
end-to-end neural methods for dialog do not take this into account; they are
trained under the assumption that at any time there is only one correct next
utterance. In this work, we focus on this problem in the goal-oriented dialog
setting, where there are different paths to reach a goal. We propose a new
method that combines supervised learning and reinforcement
learning to address this issue. We also propose a new and more
effective testbed, permuted-bAbI dialog tasks, by introducing multiple valid
next utterances to the original-bAbI dialog tasks, which allows evaluation of
goal-oriented dialog systems in a more realistic setting. We show that there is
a significant drop in performance of existing end-to-end neural methods from
81.5% per-dialog accuracy on original-bAbI dialog tasks to 30.3% on
permuted-bAbI dialog tasks. We also show that our proposed method improves
performance, achieving 47.3% per-dialog accuracy on permuted-bAbI dialog
tasks.

Comment: EMNLP 2018. The permuted-bAbI dialog tasks are available at
https://github.com/IBM/permuted-bAbI-dialog-task
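To make the evaluation setup concrete, here is a minimal Python sketch, not taken from the paper, of how per-dialog accuracy might be scored when each turn admits a set of valid next utterances rather than a single gold response; the data layout and the per_dialog_accuracy helper are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): per-dialog accuracy when each
# turn admits a set of valid next utterances, as in permuted-bAbI dialog tasks.

def per_dialog_accuracy(dialogs, predict):
    """dialogs: list of dialogs, each a list of (context, valid_utterances)
    pairs where valid_utterances is a set of acceptable responses.
    predict: function mapping a context to one predicted utterance.
    A dialog counts as correct only if every turn's prediction is valid."""
    correct = sum(
        all(predict(ctx) in valid for ctx, valid in dialog) for dialog in dialogs
    )
    return correct / len(dialogs) if dialogs else 0.0

# Toy usage: the second turn has two equally valid continuations.
dialogs = [[
    ("hello", {"hi there", "hello, how can I help?"}),
    ("book a table", {"for how many people?", "which cuisine do you prefer?"}),
]]
model = {"hello": "hi there", "book a table": "which cuisine do you prefer?"}
print(per_dialog_accuracy(dialogs, model.get))  # 1.0
```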
Evaluating Robustness of Dialogue Summarization Models in the Presence of Naturally Occurring Variations
The dialogue summarization task involves summarizing long conversations while
preserving the most salient information. Real-life dialogues often involve
naturally occurring variations (e.g., repetitions, hesitations), and existing
dialogue summarization models suffer a performance drop on such
conversations. In this study, we systematically investigate the impact of such
variations on state-of-the-art dialogue summarization models using publicly
available datasets. To simulate real-life variations, we introduce two types of
perturbations: utterance-level perturbations that modify individual utterances
with errors and language variations, and dialogue-level perturbations that add
non-informative exchanges (e.g., repetitions, greetings). We conduct our
analysis along three dimensions of robustness: consistency, saliency, and
faithfulness, which capture different aspects of the summarization model's
performance. We find that both fine-tuned and instruction-tuned models are
affected by input variations, with the latter being more susceptible,
particularly to dialogue-level perturbations. We also validate our findings via
human evaluation. Finally, we investigate whether the robustness of fine-tuned
models can be improved by training them on a fraction of perturbed data; we
observe that this approach is insufficient to address the robustness challenges
of current models, warranting a more thorough investigation to identify
better solutions. Overall, our work highlights robustness challenges in
dialogue summarization and provides insights for future research.
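As an illustration of the dialogue-level perturbations described above, here is a minimal Python sketch, assuming dialogues are represented as lists of (speaker, utterance) turns; the add_greeting and add_repetition helpers are hypothetical and not the paper's implementation.

```python
import random

# Hypothetical sketch (not the paper's implementation): dialogue-level
# perturbations that insert non-informative exchanges into a dialogue,
# represented as a list of (speaker, utterance) turns.

GREETING = [("A", "Hi, how are you?"), ("B", "I'm good, thanks!")]

def add_greeting(dialogue):
    """Prepend a non-informative greeting exchange."""
    return GREETING + dialogue

def add_repetition(dialogue, seed=0):
    """Duplicate one randomly chosen turn immediately after itself,
    simulating a speaker repeating themselves."""
    rng = random.Random(seed)
    i = rng.randrange(len(dialogue))
    return dialogue[: i + 1] + [dialogue[i]] + dialogue[i + 1:]

dialogue = [("A", "Can you summarize the Q3 report?"),
            ("B", "Sure, I'll send a summary shortly.")]
print(add_repetition(add_greeting(dialogue)))
```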